Results 1 - 20 of 2,938
1.
MedEdPORTAL ; 20: 11396, 2024.
Article in English | MEDLINE | ID: mdl-38722734

ABSTRACT

Introduction: People with disabilities and those with non-English language preferences have worse health outcomes than their counterparts due to barriers to communication and poor continuity of care. As members of both groups, people who are Deaf users of American Sign Language face compounded health disparities. Provider discomfort with these specific populations is a contributing factor, often stemming from insufficient training in medical programs. To help address these health disparities, we created a session on disability, language, and communication for undergraduate medical students. Methods: This 2-hour session was developed as part of a 2020 curriculum shift, with a total of 404 second-year medical student participants. We utilized a retrospective postsession survey to analyze learning objective achievement through a comparison of medians using the Wilcoxon signed rank test (α = .05) for the first 2 years of course implementation. Results: When assessing 158 students' self-perceived abilities to perform each of the learning objectives, students reported significantly higher confidence after the session compared to their retrospective presession confidence for all four learning objectives (all ps < .001). Responses signifying learning objective achievement (scores of 4, probably yes, or 5, definitely yes), when averaged across the first 2 years of implementation, increased from 73% before the session to 98% after the session. Discussion: Our evaluation suggests medical students could benefit from increased educational initiatives on disability culture and on health disparities caused by barriers to communication, in order to strengthen cultural humility, the delivery of health care, and, ultimately, health equity.
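To make the analysis concrete, here is a minimal sketch of this kind of retrospective pre/post comparison, not taken from the paper: invented Likert ratings for one learning objective are compared with SciPy's Wilcoxon signed-rank test, and the share of 4-or-5 responses is computed as the achievement rate.

```python
# Minimal sketch of a retrospective pre/post survey analysis (invented data).
from scipy.stats import wilcoxon

pre_session  = [2, 3, 2, 4, 3, 2, 3, 2, 3, 4]   # retrospective "before" ratings (1-5)
post_session = [4, 5, 4, 5, 4, 4, 5, 4, 4, 5]   # "after" ratings from the same students

# Paired, non-parametric comparison of medians (alpha = .05)
statistic, p_value = wilcoxon(pre_session, post_session)
print(f"W = {statistic}, p = {p_value:.4f}")

# Proportion of responses signifying objective achievement (scores of 4 or 5)
achieved = sum(r >= 4 for r in post_session) / len(post_session)
print(f"Post-session achievement rate: {achieved:.0%}")
```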


Subject(s)
Curriculum , Decision Making, Shared , Disabled Persons , Education, Medical, Undergraduate , Students, Medical , Humans , Students, Medical/psychology , Students, Medical/statistics & numerical data , Retrospective Studies , Education, Medical, Undergraduate/methods , Communication Barriers , Surveys and Questionnaires , Male , Female , Sign Language , Language
2.
CBE Life Sci Educ ; 23(2): ar22, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38709798

ABSTRACT

In recent years, an increasing number of deaf and hard-of-hearing (D/HH) undergraduates have chosen to study in STEM fields and pursue careers in research. Yet very little research has been undertaken on the barriers and inclusive experiences encountered by D/HH undergraduates who prefer to use spoken English, rather than American Sign Language (ASL), in research settings. To identify barriers and inclusive strategies, we studied six English-speaking D/HH undergraduate students working in research laboratories, along with their eight hearing mentors and three hearing peers, all of whom shared their experiences. Three researchers observed the interactions among the three groups and conducted interviews and focus groups, in addition to administering the Communication Assessment Self-Rating Scale (CASS). The main themes identified in the findings were communication and environmental barriers in research laboratories, creating accessible and inclusive laboratory environments, communication strategies, and self-advocating for effective communication. Recommendations for mentors include understanding the key elements of creating an inclusive laboratory environment for English-speaking D/HH students and effectively demonstrating cultural competence to engage in inclusive practices.


Subject(s)
Students , Humans , Deafness , Male , Female , Persons With Hearing Impairments , Research , Sign Language , Mentors , Language , Communication , Communication Barriers
3.
PLoS One ; 19(4): e0298479, 2024.
Article in English | MEDLINE | ID: mdl-38625906

ABSTRACT

OBJECTIVES: (i) To identify peer-reviewed publications reporting the mental and/or physical health outcomes of Deaf adults who are sign language users and to synthesise the evidence; (ii) where data are available, to analyse how the health of the adult Deaf population compares to that of the general population; (iii) to evaluate the quality of evidence in the identified publications; (iv) to identify limitations of the current evidence base and suggest directions for future research. DESIGN: Systematic review. DATA SOURCES: Medline, Embase, PsycINFO, and Web of Science. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: The inclusion criteria were Deaf adult populations who used a signed language, and all study types, including methods-focused papers that also contain results relating to the health outcomes of Deaf signing populations. Full-text articles published in peer-reviewed journals, in English or a signed language such as ASL (American Sign Language), were searched up to 13th June 2023. DATA EXTRACTION: Supported by the Rayyan systematic review software, two authors independently reviewed identified publications at each screening stage (primary and secondary). A third reviewer was consulted to settle any disagreements. Comprehensive data extraction included research design, study sample, methodology, findings, and a quality assessment. RESULTS: Of the 35 included studies, the majority (25 of 35) concerned mental health outcomes. The findings from this review highlight the inequalities in health and mental health outcomes for Deaf signing populations in comparison with the general population, gaps in the range of conditions studied in relation to Deaf people, and the poor quality of the available data. CONCLUSIONS: Population sample definition and the consistency of reporting standards for health outcomes of Deaf people who use sign language should be improved. Further research on health outcomes not previously reported is needed to gain a better understanding of Deaf people's state of health.


Subject(s)
Outcome Assessment, Health Care , Sign Language , Adult , Humans
4.
PLoS One ; 19(4): e0298699, 2024.
Article in English | MEDLINE | ID: mdl-38574042

ABSTRACT

Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity of capturing fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. The Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting, and resilience against noisy and incomplete data. Additionally, model performance is further optimized through hyperparameter optimization using Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
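As a rough illustration of the architecture described, here is a hedged sketch, not the authors' code: a truncated VGG16 backbone with a simple spatial-attention mask serves as a feature extractor for a scikit-learn Random Forest. The layer cutoff, the attention design, and all hyperparameters are guesses.

```python
# Hypothetical sketch of the LAVRF idea: truncated VGG16 + spatial attention
# as a feature extractor, feeding a Random Forest classifier.
import torch
import torch.nn as nn
from torchvision.models import vgg16
from sklearn.ensemble import RandomForestClassifier

class LightweightAttentiveVGG16(nn.Module):
    def __init__(self):
        super().__init__()
        # Keep only the first three convolutional blocks to reduce complexity
        self.backbone = vgg16(weights=None).features[:17]
        # Simple spatial attention: 1x1 conv -> sigmoid mask over the feature map
        self.attention = nn.Sequential(nn.Conv2d(256, 1, kernel_size=1),
                                       nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        feats = self.backbone(x)               # (N, 256, H, W)
        mask = self.attention(feats)           # (N, 1, H, W), values in [0, 1]
        attended = feats * mask                # emphasize pertinent regions
        return self.pool(attended).flatten(1)  # (N, 256) feature vectors

extractor = LightweightAttentiveVGG16().eval()
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Usage sketch: extract features for a batch of sign images, then fit the RF.
images = torch.randn(8, 3, 224, 224)   # stand-in for ASL images
with torch.no_grad():
    features = extractor(images).numpy()
labels = [0, 1, 2, 0, 1, 2, 0, 1]      # stand-in class labels
rf.fit(features, labels)
```

Decoupling feature extraction from classification in this way is what lets the Random Forest handle the high-dimensional representations the abstract mentions.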


Subject(s)
Random Forest , Sign Language , Humans , Pattern Recognition, Automated/methods , Gestures , Upper Extremity
5.
Brain Lang ; 252: 105413, 2024 May.
Article in English | MEDLINE | ID: mdl-38608511

ABSTRACT

Sign languages (SLs) are expressed through different bodily actions, ranging from re-enactment of physical events (constructed action, CA) to sequences of lexical signs with internal structure (plain telling, PT). Despite the prevalence of CA in signed interactions and its significance for SL comprehension, its neural dynamics remain unexplored. We examined the processing of different types of CA (subtle, reduced, and overt) and of PT in 35 adult deaf or hearing native signers. Electroencephalographic responses to signed sentences with incongruent targets were recorded. Attenuated N300 and early N400 responses were observed for CA in deaf but not in hearing signers. No differences were found among the CA types in either group, suggesting a continuum from PT to overt CA. Deaf signers focused more on body movements; hearing signers, on faces. We conclude that CA is processed less effortlessly than PT, arguably because of its strong focus on bodily actions.


Subject(s)
Comprehension , Deafness , Electroencephalography , Sign Language , Humans , Comprehension/physiology , Adult , Male , Female , Deafness/physiopathology , Young Adult , Brain/physiology , Evoked Potentials/physiology
7.
Sensors (Basel) ; 24(5)2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38475008

ABSTRACT

Sign language serves as the primary mode of communication for the deaf community. With technological advancements, it is crucial to develop systems capable of enhancing communication between deaf and hearing individuals. This paper reviews recent state-of-the-art methods in sign language recognition, translation, and production. Additionally, we introduce a rule-based system, called ruLSE, for generating synthetic datasets in Spanish Sign Language. To assess the usefulness of these datasets, we conduct experiments with two state-of-the-art models based on Transformers, MarianMT and Transformer-STMC. In general, we observe that the former achieves better results (+3.7 points on the BLEU-4 metric), although the latter is up to four times faster. Furthermore, the use of pre-trained Spanish word embeddings enhances results. The rule-based system demonstrates superior performance and efficiency compared to Transformer models in Sign Language Production tasks. Lastly, we contribute to the state of the art by releasing the generated Spanish synthetic dataset, named synLSE.
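For readers unfamiliar with the metric, here is a small sketch of scoring candidate translations with BLEU-4 using sacreBLEU; the sentence pairs are invented and unrelated to the paper's data.

```python
# Hypothetical sketch of BLEU-4 scoring for translation output (invented data).
import sacrebleu

hypotheses = ["hello how are you", "i go to school tomorrow"]
# One reference stream: the i-th reference corresponds to the i-th hypothesis.
references = [["hello how are you", "tomorrow i am going to school"]]

# sacreBLEU's default BLEU uses n-grams up to order 4 (i.e., BLEU-4).
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU-4: {bleu.score:.1f}")
```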


Subject(s)
Deep Learning , Humans , Sign Language , Hearing , Communication
9.
BMJ ; 384: 2615, 2024 02 28.
Article in English | MEDLINE | ID: mdl-38418094

Subject(s)
Deafness , Sign Language , Humans
11.
Science ; 383(6682): 519-523, 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38301028

ABSTRACT

Sign languages are naturally occurring languages. As such, their emergence and spread reflect the histories of their communities. However, limitations in historical recordkeeping and linguistic documentation have hindered the diachronic analysis of sign languages. In this work, we used computational phylogenetic methods to study family structure among 19 sign languages from deaf communities worldwide. We used phonologically coded lexical data from contemporary languages to infer relatedness and suggest that these methods can help study regular form changes in sign languages. The inferred trees are consistent in key respects with known historical information but challenge certain assumed groupings and surpass analyses made available by traditional methods. Moreover, the phylogenetic inferences are not reducible to geographic distribution but do affirm the importance of geopolitical forces in the histories of human languages.
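As a toy illustration of the general approach, the following sketch builds a distance-based (neighbor-joining) tree from binary phonological codings using Biopython. The codings are invented and the actual study used more sophisticated phylogenetic inference; this only shows the shape of the pipeline.

```python
# Hypothetical sketch: distance-based phylogeny from binary phonological codings.
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor
from Bio import Phylo

# Toy binary feature codings of lexical items for four sign languages (invented)
codings = {
    "ASL":    [1, 0, 1, 1, 0, 1],
    "LSF":    [1, 0, 1, 0, 0, 1],   # French Sign Language
    "BSL":    [0, 1, 0, 1, 1, 0],   # British Sign Language
    "Auslan": [0, 1, 0, 1, 1, 1],
}
names = list(codings)

def hamming(a, b):
    """Normalized Hamming distance between two binary codings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Lower-triangular matrix (including the zero diagonal), as Biopython expects
matrix = [[hamming(codings[names[i]], codings[names[j]]) for j in range(i)] + [0.0]
          for i in range(len(names))]
dm = DistanceMatrix(names, matrix)

tree = DistanceTreeConstructor().nj(dm)   # neighbor-joining tree
Phylo.draw_ascii(tree)                    # groups ASL with LSF, BSL with Auslan
```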


Subject(s)
Language , Linguistics , Sign Language , Humans , Language/history , Linguistics/classification , Linguistics/history , Phylogeny
12.
J Biomech ; 165: 112011, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38382174

ABSTRACT

Prior studies suggest that native (born to at least one deaf or signing parent) and non-native signers have different musculoskeletal health outcomes from signing, but the individual and combined biomechanical factors driving these differences are not fully understood. Such group differences in signing may be explained by the five biomechanical factors of American Sign Language that have been previously identified: ballistic signing, hand and wrist deviations, work envelope, muscle tension, and "micro" rests. Prior work used motion capture and surface electromyography to collect joint kinematics and muscle activations, respectively, from ten native and thirteen non-native signers as they signed for 7.5 min. Each factor was individually compared between groups. A factor analysis was used to determine the relative contributions of each biomechanical factor between signing groups. No significant differences were found between groups for ballistic signing, hand and wrist deviations, work envelope volume, excursions from recommended work envelope, muscle tension, or "micro" rests. Factor analysis revealed that "micro" rests had the strongest contribution for both groups, while hand and wrist deviations had the weakest contribution. Muscle tension and work envelope had stronger contributions for native compared to non-native signers, while ballistic signing had a stronger contribution for non-native compared to native signers. Using a factor analysis enabled discernment of relative contributions of biomechanical variables across native and non-native signers that could not be detected through isolated analysis of individual measures. Differences in the contributions of these factors may help explain the differences in signing across native and non-native signers.
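A hedged sketch of the kind of factor analysis described, using scikit-learn on random stand-in data rather than the study's motion-capture and EMG measurements; the number of latent factors and all values are illustrative only.

```python
# Hypothetical sketch: estimating how strongly each biomechanical measure
# loads onto latent factors (stand-in data, not the study's measurements).
import numpy as np
from sklearn.decomposition import FactorAnalysis

measures = ["ballistic_signing", "hand_wrist_deviation", "work_envelope",
            "muscle_tension", "micro_rests"]
rng = np.random.default_rng(0)
X = rng.normal(size=(23, len(measures)))   # 23 signers x 5 measures (stand-in)

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
for name, loadings in zip(measures, fa.components_.T):
    print(f"{name:22s} loadings: {np.round(loadings, 2)}")
```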


Subject(s)
Hand , Sign Language , Humans , United States , Upper Extremity , Wrist , Factor Analysis, Statistical
13.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339542

ABSTRACT

Japanese Sign Language (JSL) is vital for communication in Japan's deaf and hard-of-hearing community. However, likely because of the large number of patterns (46 types) and the mixture of static and dynamic gestures among them, the dynamic gestures have been excluded from most studies. The few systems that do address the dynamic JSL alphabet have unsatisfactory recognition accuracy. We propose a dynamic JSL recognition system that uses effective feature extraction and feature selection approaches to overcome these challenges. The procedure combines hand pose estimation, effective feature extraction, and machine learning techniques. We collected a video dataset capturing JSL gestures with standard RGB cameras and employed MediaPipe for hand pose estimation. Four types of features are proposed; their significance is that the same feature generation method can be used regardless of the number of frames and of whether the gestures are dynamic or static. We employed a Random Forest (RF) based feature selection approach to select the most informative features. Finally, we fed the reduced feature set into a kernel-based Support Vector Machine (SVM) classifier. Evaluations conducted on our proprietary, newly created dynamic Japanese Sign Language alphabet dataset and on the LSA64 dynamic dataset yielded recognition accuracies of 97.20% and 98.40%, respectively. This approach not only addresses the complexities of JSL but also holds the potential to bridge communication gaps, offering effective communication for the deaf and hard-of-hearing, with broader implications for sign language recognition systems globally.
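The pipeline described maps naturally onto standard tooling. Below is a hedged sketch, not the authors' implementation: MediaPipe hand landmarks as raw features, Random-Forest-based feature selection, and a kernel SVM. The paper's four feature types and its dataset handling are simplified to stand-in arrays.

```python
# Hypothetical sketch: MediaPipe landmarks -> RF feature selection -> kernel SVM.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_features(image_bgr):
    """Return a flat (63,) vector of 21 hand-landmark (x, y, z) coordinates."""
    result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

# In practice, X would stack landmark_features() over video frames; stand-ins here.
X = np.random.rand(100, 63)           # stand-in feature vectors
y = np.random.randint(0, 46, 100)     # stand-in labels for 46 JSL patterns

model = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf"),                # kernel-based SVM classifier
)
model.fit(X, y)
```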


Subject(s)
Pattern Recognition, Automated , Sign Language , Humans , Japan , Pattern Recognition, Automated/methods , Hand , Algorithms , Gestures
14.
Sci Rep ; 14(1): 1043, 2024 01 10.
Article in English | MEDLINE | ID: mdl-38200108

ABSTRACT

The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e., the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing from varied viewing angles may be more difficult for late L2 learners of a signed language, who encounter less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate at comprehending signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate the processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should maximise their exposure to diverse signed language input, both in terms of viewing angle and other difficult viewing conditions, to maximise comprehension.


Subject(s)
Learning , Sign Language , Humans , Individuality , Linguistics , Physical Examination
15.
Sensors (Basel) ; 24(2)2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38257544

ABSTRACT

Sign language is a natural communication method used to convey messages within the deaf community. In the study of sign language recognition through wearable sensors, data sources are limited and the data acquisition process is complex. This research aims to collect an American Sign Language dataset with a wearable inertial motion capture system and to realize recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset consisting of 300 commonly used sentences was gathered from 3 volunteers. The recognition network consists of three main components: a convolutional neural network, a bi-directional long short-term memory network, and a connectionist temporal classification layer. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. The translation network is an encoder-decoder model based on long short-term memory with global attention. The word error rate of end-to-end translation is 16.63%. The proposed method has the potential to recognize more sign language sentences given reliable inertial data from the device.
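To illustrate the recognition network's three components, here is a hedged PyTorch sketch, with layer sizes, channel counts, and sequence lengths invented rather than taken from the paper: a 1-D convolution over inertial channels, a bidirectional LSTM, and a CTC loss over word labels.

```python
# Hypothetical sketch of a CNN -> BiLSTM -> CTC recognition network (invented sizes).
import torch
import torch.nn as nn

class SignCTCNet(nn.Module):
    def __init__(self, n_channels=36, n_classes=300, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes + 1)    # +1 for the CTC blank

    def forward(self, x):                  # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # back to (B, T, 64)
        h, _ = self.lstm(h)
        return self.fc(h).log_softmax(-1)  # (B, T, n_classes + 1)

model = SignCTCNet()
ctc = nn.CTCLoss(blank=300)               # blank index = n_classes

x = torch.randn(4, 120, 36)               # 4 sequences, 120 frames, 36 IMU channels
logp = model(x).transpose(0, 1)           # CTCLoss expects (T, B, C)
targets = torch.randint(0, 300, (4, 10))  # stand-in word-label sequences
loss = ctc(logp, targets,
           input_lengths=torch.full((4,), 120, dtype=torch.long),
           target_lengths=torch.full((4,), 10, dtype=torch.long))
loss.backward()
```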


Subject(s)
Sign Language , Wearable Electronic Devices , Humans , United States , Motion Capture , Neurons , Perception
16.
J Deaf Stud Deaf Educ ; 29(2): 187-198, 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38073324

ABSTRACT

In Sri Lanka, about 300,000 Sinhala-speaking people are deaf or hard of hearing (DHH) and would benefit from a common Sinhala sign language, technological resources such as captioning, and educational and social support. There is no fully developed common sign language for members of the Sinhalese community, a severe shortage of sign language interpreters, and few resources for teachers. This exploratory study, undertaken in all nine provinces of Sri Lanka, examined the use of sign language, access to education for people with disabilities, and the availability of trained or qualified educators to work with DHH people. Data were gathered via interviews and focus groups with Special Education Assistant Directors, Principals and Teachers in Deaf Schools, and Teachers of Special Education Deaf Units in mainstream schools. The DHH members of Sri Lankan society are marginalized and under-supported, and their educational and social needs require urgent attention. This study provides a basis for much needed attention and reform.


Subject(s)
Hearing Loss , Persons With Hearing Impairments , Humans , Sign Language , Sri Lanka , Education, Special , Hearing
17.
J Subst Use Addict Treat ; 158: 209233, 2024 03.
Article in English | MEDLINE | ID: mdl-38061637

ABSTRACT

INTRODUCTION: Recent research suggests that alcohol use disorder may be more prevalent in the Deaf community, a diverse sociolinguistic minority group. However, rates of treatment-seeking among Deaf individuals are even lower than in the general society. This study used the Theory of Planned Behavior to identify Deaf adults' beliefs about treatment that may prevent their treatment-seeking behaviors. METHODS: This study conducted elicitation interviews with 16 Deaf adults. The study team recruited participants from across the U.S. and conducted interviews on Zoom. Participant ages ranged from 27 to 67 years (M = 40, SD = 10.8). Seventy-five percent of the sample was male, 75% were White, and 12.5% were Hispanic/Latine. The study conducted interviews in American Sign Language, subsequently interpreted into English by a nationally certified interpreter, and transcribed for data analyses. The study analyzed transcripts using the Framework Method. The study team coded the interviews in groups and assessed for saturation (≤5% new themes) of themes throughout the analysis. This study reached saturation in the third group (six total groups). RESULTS: Identified themes followed the Theory of Planned Behavior constructs. The study identified nine Behavioral Beliefs with four advantages and five disadvantages of seeking treatment, four Normative Beliefs with one support and three oppositions to seeking treatment, and thirteen Control Beliefs with five facilitators and eight barriers to seeking treatment. Overall, the Deaf participants reported several unique beliefs based on their cultural and linguistic perspectives, including a concern about unqualified providers, experiencing stress in treatment with hearing providers, stigma within the Deaf community, less access to cultural information about alcohol and mental health, less encouragement of traditional treatment in marginalized communities, and additional barriers (e.g., communication, limited Deaf treatment options, discrimination, etc.). CONCLUSIONS: A thorough understanding of individual beliefs about treatment is necessary to develop interventions that may increase treatment-seeking behaviors. Previous research has demonstrated that individual beliefs may be modified using Cognitive Behavioral Therapy techniques to increase treatment-seeking behaviors among hearing individuals. Similar interventions may be useful with Deaf individuals; however, they must consider the unique cultural and linguistic perspectives of the community.


Subject(s)
Mental Health , Persons With Hearing Impairments , Adult , Humans , Male , Middle Aged , Aged , Persons With Hearing Impairments/psychology , Communication , Sign Language , Alcohol Drinking
18.
J Deaf Stud Deaf Educ ; 29(2): 115-133, 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38079616

ABSTRACT

Research has demonstrated that deaf children of deaf signing parents (DOD) are afforded developmental advantages. This can be misconstrued as indicating that no DOD children exhibit early language delays (ELDs) because of their early access to a visual language. Little research has studied this presumption. In this study, we examine 174 ratings of DOD 3- to 5-year-old children, for whom signing in the home was indicated, using archival data from the online database of the Visual Communication and Sign Language Checklist. Our goals were to (1) examine the incidence of ELDs in a cohort of DOD children; (2) compare alternative scaling strategies for identifying ELD children; (3) explore patterns among behavioral ratings with a view toward developing a greater understanding of the types of language behaviors that may lie at the root of language delays; and (4) suggest recommendations for parents and professionals working with language-delayed DOD children. The results indicated that a significant number of ratings suggested ELDs, with a subset significantly delayed. These children likely require further evaluation. Among the less delayed group, ASL skills, rather than communication or cognition, were seen as the major concern, suggesting that even DOD children may require support developing linguistically accurate ASL. Overall, these findings support the need for early and ongoing evaluation of visual language skills in young DOD children.


Subject(s)
Deafness , Sign Language , Humans , Child, Preschool , Language , Parents , Cognition
20.
J Deaf Stud Deaf Educ ; 29(2): 105-114, 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-37973400

ABSTRACT

This case study describes the use of a syntax intervention with two deaf children who did not acquire a complete first language (L1) from birth. It looks specifically at their ability to produce subject-verb-object (SVO) sentence structure in American Sign Language (ASL) after receiving intervention. This was an exploratory case study in which investigators utilized an intervention that contained visuals to help teach SVO word order to young deaf children. Baseline data were collected over three sessions before implementation of a targeted syntax intervention and two follow-up sessions over 3-4 weeks. Both participants demonstrated improvements in their ability to produce SVO structure in ASL in 6-10 sessions. Visual analysis revealed a positive therapeutic trend that was maintained in follow-up sessions. These data provide preliminary evidence that a targeted intervention may help young deaf children with an incomplete L1 learn to produce basic word order in ASL. Results from this case study can help inform the practice of professionals working with signing deaf children who did not acquire a complete L1 from birth (e.g., speech-language pathologists, deaf mentors/coaches, ASL specialists, etc.). Future research should investigate the use of this intervention with a larger sample of deaf children.


Subject(s)
Language , Sign Language , Child , Humans , United States , Language Development , Learning